Time-delay mappings constructed using neural networks have proven successful in performing nonlinear system identification; however, because of their discrete nature, their use in bifurcation analysis of continuous-time systems is limited. This shortcoming can be avoided by embedding the neural networks in a training algorithm that mimics a numerical integrator. Both explicit and implicit integrators can be used. The former case is based on repeated evaluations of the network in a feedforward implementation; the latter relies on a recurrent network implementation. Here the algorithms and their implementation on parallel machines (SIMD and MIMD architectures) are discussed.
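The following is a minimal sketch, not the paper's implementation, of the distinction drawn above: a network approximating the vector field dx/dt = f(x) is embedded in an explicit Euler step (one feedforward evaluation per step) and in an implicit Euler step (the update appears on both sides and is solved by iterating the network, i.e., a recurrent evaluation). The function `vector_field` is a hypothetical stand-in for a trained network; the Euler scheme and fixed-point solver are illustrative choices, not taken from the source.

```python
import numpy as np

def vector_field(x):
    # Hypothetical stand-in for a trained network approximating dx/dt = f(x);
    # here a toy nonlinear field for illustration only.
    return np.tanh(x) - 0.5 * x

def explicit_euler_step(x, h):
    # Explicit integrator: a single feedforward evaluation of the network,
    # x_{k+1} = x_k + h * f(x_k).
    return x + h * vector_field(x)

def implicit_euler_step(x, h, tol=1e-10, max_iter=100):
    # Implicit integrator: x_{k+1} = x_k + h * f(x_{k+1}). Solved here by
    # fixed-point iteration, which amounts to evaluating the network
    # recurrently until the update converges.
    x_next = x.copy()
    for _ in range(max_iter):
        x_new = x + h * vector_field(x_next)
        if np.linalg.norm(x_new - x_next) < tol:
            break
        x_next = x_new
    return x_next

if __name__ == "__main__":
    x = np.array([0.1])
    h = 0.05
    print("explicit step:", explicit_euler_step(x, h))
    print("implicit step:", implicit_euler_step(x, h))
```

The explicit step parallelizes naturally across independent trajectory evaluations, while the implicit step's inner iteration is inherently sequential per step, which is one reason the two variants map differently onto SIMD and MIMD architectures.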